
    Hedge funds: an industry in its adolescence

    The dramatic increase in the number of hedge funds and the "institutionalization" of the industry over the past decade have spurred rigorous research into hedge fund performance. This research has tended to uncover more questions than answers about the dynamic and multifaceted hedge fund industry. This article presents a simple hedge fund business model in which fund returns are a function of three key elements: how the funds trade, where they trade, and how the positions are financed. The article also provides methods to help investors, intermediaries, and regulators identify systemic risk factors inherent in hedge fund strategies. Estimating these risk factors requires an accurate history of hedge fund performance. The authors examine recent statistics from three commercial hedge fund databases and discuss the database biases that must be recognized to obtain accurate measures of returns. While the data show that today's hedge funds use myriad strategies with no uniform definition, the proposed business model implies that hedge fund managers are diversifying in order to maximize the enterprise value of their firms. But this diversification does not preclude the risk of leveraged opinions converging onto the same set of bets. Preventing convergence risk will require action by investors, intermediaries, regulators, and fund managers to improve industry-level disclosure and transparency while preserving the privacy of individual hedge funds' positions.

    Disease gravity and urgency of need as guidelines for liver allocation

    One thousand one hundred and twenty-eight candidates for liver transplantation were stratified into five urgency-of-need categories by the United Network for Organ Sharing (UNOS) criteria. Most patients of low-risk UNOS 1 status remained alive after 1 yr without transplantation; mortality while waiting was 3%, occurring after a median of 229.5 days. In contrast, only 3% of those entered at the highest-risk UNOS 5 category survived without transplantation; 28% died while waiting, the deaths occurring at a median of 5.5 days. The UNOS categories in between showed the expected gradations, in which at each higher level fewer patients remained as candidates throughout the 1-yr duration of study while progressively more died at earlier and earlier times while waiting for an organ. In a separate study of posttransplantation survival during the same time period, the best postoperative results were in the lowest-risk UNOS 1 and 2 patients (88% combined), and the worst results were those in UNOS 5 (71%). However, a relative-risk cross-analysis showed that transplantation may have yielded a negative benefit in terms of 1-yr survival for the low-risk elective patients, but that a gain in life extension was achieved in the potentially lethal UNOS categories 3, 4, and 5 (greatest for UNOS 3). These findings and conclusions are discussed in terms of total care of patients with liver disease, and in the context of organ allocation policies of the United States and Europe.

    The sharpness of gamma-ray burst prompt emission spectra

    We aim to obtain a measure of the curvature of time-resolved spectra that can be compared directly to theory. This tests the ability of models such as synchrotron emission to explain the peaks or breaks of GBM prompt emission spectra. We take the burst sample from the official Fermi GBM GRB time-resolved spectral catalog. We re-fit all spectra with a measured peak or break energy in the catalog best-fit models in various energy ranges covering the curvature around the spectral peak or break, resulting in a total of 1,113 spectra being analysed. We compute the sharpness angles under the peak or break of the triangle constructed under the model fit curves and compare them to the values obtained from various representative emission models: blackbody, single-electron synchrotron, and synchrotron emission from a Maxwellian or power-law electron distribution. We find that 35% of the time-resolved spectra are inconsistent with the single-electron synchrotron function, and 91% are inconsistent with the Maxwellian synchrotron function. The single-temperature, single-emission-time-and-location blackbody function is found to be sharper than all the spectra. No general evolutionary trend of the sharpness angle is observed, either per burst or for the whole population. It is found that the limiting case, a single-temperature Maxwellian synchrotron function, can only contribute up to 58^{+23}_{-18}% of the peak flux. Our results show that even the sharpest but non-realistic case, the single-electron synchrotron function, cannot explain a large fraction of the observed GRB prompt spectra. Because any combination of physically possible synchrotron spectra added together will always further broaden the spectrum, emission mechanisms other than optically thin synchrotron radiation are likely required in a full explanation of the spectral peaks or breaks of the GRB prompt emission phase. Comment: 16 pages, 13 figures, 2 tables, accepted for publication in A&
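    The triangle-based sharpness measure described in this abstract can be sketched numerically. This is a minimal illustration under stated assumptions, not the catalog pipeline: the Band-function parameter values, the choice of one decade in energy on either side of the peak, and the normalisation are all assumptions for demonstration; the catalog paper's exact triangle construction may differ.

```python
import numpy as np

def band_nufnu(e, alpha=-1.0, beta=-2.5, e_peak=300.0):
    """Band function expressed as E^2 * N(E) (nuF_nu), arbitrary normalisation.
    Parameter values are illustrative, not taken from the GBM catalog."""
    e0 = e_peak / (2.0 + alpha)          # e-folding energy; peak of E^2 N(E) is at e_peak
    eb = (alpha - beta) * e0             # break energy joining the two power laws
    if e < eb:
        n = e**alpha * np.exp(-e / e0)
    else:
        n = eb**(alpha - beta) * np.exp(beta - alpha) * e**beta
    return e**2 * n

def sharpness_angle(nufnu, e_peak, decade=1.0):
    """Apex angle (degrees) of the triangle drawn in log-log nuF_nu space
    between the spectral peak and two points `decade` dex away in energy.
    Sharper (more curved) spectra give smaller angles."""
    xs = np.log10([e_peak / 10**decade, e_peak, e_peak * 10**decade])
    ys = np.log10([nufnu(10**x) for x in xs])
    apex = np.array([xs[1], ys[1]])
    v1 = np.array([xs[0], ys[0]]) - apex
    v2 = np.array([xs[2], ys[2]]) - apex
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(cosang))
```

    A spectrum with a steeper high-energy slope yields a smaller apex angle, which is the sense in which the blackbody is "sharper" than all observed spectra.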

    A gene regulatory network armature for T lymphocyte specification

    Choice of a T lymphoid fate by hematopoietic progenitor cells depends on sustained Notch–Delta signaling combined with tightly regulated activities of multiple transcription factors. To dissect the regulatory network connections that mediate this process, we have used high-resolution analysis of regulatory gene expression trajectories from the beginning to the end of specification, tests of the short-term Notch dependence of these gene expression changes, and analyses of the effects of overexpression of two essential transcription factors, namely PU.1 and GATA-3. Quantitative expression measurements of >50 transcription factor and marker genes have been used to derive the principal components of regulatory change through which T cell precursors progress from primitive multipotency to T lineage commitment. Our analyses reveal separate contributions of Notch signaling, GATA-3 activity, and down-regulation of PU.1. Using BioTapestry (www.BioTapestry.org), the results have been assembled into a draft gene regulatory network for the specification of T cell precursors and the choice of T as opposed to myeloid/dendritic or mast-cell fates. This network also accommodates effects of E proteins and mutual repression circuits of Gfi1 against Egr-2 and of TCF-1 against PU.1 as proposed elsewhere, but requires additional functions that remain unidentified. Distinctive features of this network structure include the intense dose dependence of GATA-3 effects, the gene-specific modulation of PU.1 activity based on Notch activity, the lack of direct opposition between PU.1 and GATA-3, and the need for a distinct, late-acting repressive function or functions to extinguish stem and progenitor-derived regulatory gene expression.
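    The dimensionality-reduction step this abstract mentions — deriving principal components of regulatory change from many gene-expression measurements — can be sketched with SVD-based PCA. The data below are randomly generated stand-ins (the study measured >50 genes; 6 here), and the log-transform and per-gene centering are assumed preprocessing choices, not the authors' documented pipeline.

```python
import numpy as np

# Hypothetical expression matrix: rows = developmental time points,
# columns = regulatory genes. Values are synthetic placeholders.
rng = np.random.default_rng(0)
timepoints, genes = 8, 6
expr = rng.lognormal(mean=2.0, sigma=0.5, size=(timepoints, genes))

# Log-transform and centre each gene, then take principal components via SVD.
x = np.log2(expr)
x -= x.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)

scores = u * s                      # each row: one time point's position in PC space
explained = s**2 / (s**2).sum()     # fraction of variance captured by each PC
```

    Plotting the rows of `scores` in the first two components traces the precursors' trajectory from multipotency toward commitment.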

    Weaning of immunosuppression in liver transplant recipients

    Immunosuppression has occasionally been discontinued by noncompliant liver allograft recipients, for whom an additional 4 1/2 years of follow-up is provided here. These anecdotal observations prompted a previously reported prospective drug withdrawal program in 59 liver recipients. This prospective series has been increased to 95 patients whose weaning was begun between June 1992 and March 1996, 8.4±4.4 (SD) years after liver replacement. A further 4 1/2 years of follow-up was obtained for the 5 self-weaned patients. The prospectively weaned recipients (93 livers; 2 liver/kidney) had undergone transplantation under immunosuppression based on azathioprine (AZA, through 1979), cyclosporine (CsA, 1980-1989), or tacrolimus (TAC, 1989-1994). In patients on CsA- or TAC-based cocktails, the adjunct drugs were weaned first in the early part of the trial; since 1994, the T cell-directed drugs have been weaned first. Three of the 5 original self-weaned recipients remain well after drug-free intervals of 14 to 17 years. A fourth patient died in a vehicular accident after 11 years off immunosuppression, and the fifth underwent retransplantation because of hepatitis C infection after 9 drug-free years; their allografts had no histopathologic evidence of rejection. Eighteen (19%) of the 95 patients in the prospective series have been drug free for 10 months to 4.8 years. In the total group, 18 (19%) have had biopsy-proved acute rejection; 7 (7%) had a presumed acute rejection without biopsy; 37 (39%) are still weaning; and 12 (13%, all well) were withdrawn from the protocol at reduced immunosuppression because of noncompliance (n=8), recurrent PBC (n=2), pregnancy (n=1), or renal failure necessitating kidney transplantation (n=1). No patients were formally diagnosed with chronic rejection, but 3 (3%) were placed back on preexisting immunosuppression or switched from CsA to TAC because of histopathologic evidence of duct injury. Two patients with normal liver function died during the trial, both from complications of prior chronic immunosuppression. No grafts suffered permanent functional impairment, and only one patient developed temporary jaundice. Long-surviving liver transplant recipients are systematically overimmunosuppressed. Consequently, drug weaning, whether incomplete or complete, is an important management strategy provided it is done slowly under careful physician surveillance. Complete weaning from CsA-based regimens has been difficult. Disease recurrence during drug withdrawal was documented in 2 of 13 patients with PBC and could be a risk with other autoimmune disorders.

    Coronary Risk Assessment by Point-Based vs. Equation-Based Framingham Models: Significant Implications for Clinical Care

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk-group classifications generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects aged 20–79 from the 2001–2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, then compared differences in these risk estimates and whether those differences would place subjects into different ATP-III risk groups (<10% risk, 10–20% risk, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make the results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having “moderate” risk (<10% risk of a major coronary event in the next 10 years), 22% as having “moderately high” (10–20%) risk, and 7% as having “high” (>20%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk-group misclassifications would affect guideline-recommended drug treatment strategies for 25–46% of affected subjects. Patterns of misclassification varied significantly by gender, age, and underlying CHD risk. Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
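    The grouping logic behind the comparison in this abstract can be expressed directly. This sketch encodes only the ATP-III cutoffs (<10%, 10–20%, >20%) and the misclassification check, not the Framingham risk equations themselves; which side of each boundary the exact values 10% and 20% fall on is an assumption here.

```python
def atp3_group(ten_year_risk_pct):
    """Map a 10-year coronary risk estimate (%) to the ATP-III risk groups
    used in the study: <10%, 10-20%, >20%. Boundary handling is assumed."""
    if ten_year_risk_pct < 10:
        return "<10%"
    if ten_year_risk_pct <= 20:
        return "10-20%"
    return ">20%"

def misclassified(original_pct, point_based_pct):
    """True when the two models place the same subject in different
    ATP-III groups, even if the raw risk estimates are close."""
    return atp3_group(original_pct) != atp3_group(point_based_pct)
```

    Note that a small numerical disagreement near a cutoff (e.g. 9.5% vs. 10.5%) changes the treatment group, which is why modest estimate differences between the two models translate into millions of reclassified adults.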